Learning Fair Representations
Authors
Abstract
We propose a learning algorithm for fair classification that achieves both group fairness (the proportion of members of a protected group receiving positive classification is identical to the proportion in the population as a whole) and individual fairness (similar individuals are treated similarly). We formulate fairness as an optimization problem: find a representation of the data that encodes it as well as possible while simultaneously obfuscating any information about membership in the protected group. We show positive results for our algorithm relative to other known techniques on three datasets. Moreover, our approach has several advantages: first, the intermediate representation can be reused for other classification tasks (i.e., transfer learning is possible); second, it takes a step toward learning a distance metric that identifies the dimensions of the data important for classification.
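As a rough illustration of the trade-off described in the abstract, the sketch below evaluates a toy objective in the spirit of this formulation: data points are softly assigned to a small set of prototypes, and the loss balances reconstruction quality, prediction accuracy, and statistical parity between groups. All names (`L_x`, `L_y`, `L_z`, the prototype count, the trade-off weights) and the random data are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

# Hypothetical sketch of a prototype-based fair-representation objective.
# Shapes, weights, and variable names are assumptions for illustration.

rng = np.random.default_rng(0)
n, d, k = 100, 5, 3            # samples, features, prototypes
X = rng.normal(size=(n, d))    # inputs
y = rng.integers(0, 2, n)      # binary labels
s = rng.integers(0, 2, n)      # protected-group membership

V = rng.normal(size=(k, d))    # prototype locations
w = rng.uniform(0, 1, k)       # per-prototype predicted P(y = 1)

# Soft assignment of each point to prototypes: softmax over -distance,
# shifted by the row minimum for numerical stability.
dists = ((X[:, None, :] - V[None, :, :]) ** 2).sum(-1)
M = np.exp(-(dists - dists.min(axis=1, keepdims=True)))
M /= M.sum(axis=1, keepdims=True)

L_x = ((M @ V - X) ** 2).sum(axis=1).mean()                 # reconstruction
L_z = np.abs(M[s == 0].mean(0) - M[s == 1].mean(0)).sum()   # group parity
p = M @ w                                                   # predicted P(y = 1)
L_y = -(y * np.log(p) + (1 - y) * np.log(1 - p)).mean()     # prediction loss

A_x, A_y, A_z = 0.01, 1.0, 2.0   # trade-off weights (illustrative)
loss = A_x * L_x + A_y * L_y + A_z * L_z
```

Minimizing a weighted sum of this kind over the prototypes and prediction weights is what makes the two goals "competing": shrinking `L_z` pushes the representation to carry no group information, while `L_x` and `L_y` pull it toward preserving the data and the labels.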
Similar resources
Learning Adversarially Fair and Transferable Representations
In this work, we advocate for representation learning as the key to mitigating unfair prediction outcomes downstream. We envision a scenario where learned representations may be handed off to other entities with unknown objectives. We propose and explore adversarial representation learning as a natural method of ensuring those entities will act fairly, and connect group fairness (demographic pa...
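The adversarial setup described in this abstract can be sketched as an encoder whose objective rewards task accuracy but penalizes an adversary's ability to recover the protected attribute from the representation. The forward pass below (a single evaluation, not a training loop) shows the shape of such an objective; all names, dimensions, and the trade-off weight `lam` are assumptions for illustration, not taken from the paper.

```python
import numpy as np

# Illustrative forward pass of an adversarial fairness objective:
# an encoder produces Z = f(X); a task head predicts y from Z while
# an adversary tries to recover the protected attribute s from Z.
# The encoder would be trained to minimize task loss MINUS the
# adversary's loss (all shapes and names here are assumptions).

rng = np.random.default_rng(1)
n, d, h = 50, 4, 3
X = rng.normal(size=(n, d))
y = rng.integers(0, 2, n)      # task labels
s = rng.integers(0, 2, n)      # protected attribute

W_enc = rng.normal(size=(d, h))   # encoder weights
w_task = rng.normal(size=h)       # task-head weights
w_adv = rng.normal(size=h)        # adversary weights

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def bce(p, t):
    # Binary cross-entropy between predictions p and targets t.
    return -(t * np.log(p) + (1 - t) * np.log(1 - p)).mean()

Z = np.tanh(X @ W_enc)                      # learned representation
task_loss = bce(sigmoid(Z @ w_task), y)
adv_loss = bce(sigmoid(Z @ w_adv), s)

lam = 1.0                                   # fairness/accuracy trade-off
encoder_objective = task_loss - lam * adv_loss
```

In a full implementation the adversary is trained to minimize `adv_loss` while the encoder minimizes `encoder_objective`; at equilibrium, a representation the adversary cannot beat chance on can be handed to downstream entities, which is the transfer scenario the abstract envisions.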
Full text
Modeling Classification and Inference Learning
Human categorization research is dominated by work in classification learning. The field may be in danger of equating the classification learning paradigm with the more general phenomenon of category learning. This paper compares classification and inference learning and finds that different patterns of behavior emerge depending on which learning mode is engaged. Inference learning tends to focus subj...
Full text
CS 224W Project Final Report: Learning Fair Graph Representations
In this paper we present the Variational Fair Graph Autoencoder (VFGAE) as a model to learn feature representations on graphs that are invariant to a specified nuisance variable s. This model brings together previous work on the Variational Graph Autoencoder proposed by Kipf and Welling, and the Variational Fair Autoencoder proposed by Louizos et al. To take into account the graph structure,...
Full text
Data Decisions and Theoretical Implications when Adversarially Learning Fair Representations
How can we learn a classifier that is “fair” for a protected or sensitive group, when we do not know if the input to the classifier belongs to the protected group? How can we train such a classifier when data on the protected group is difficult to attain? In many settings, finding out the sensitive input attribute can be prohibitively expensive even during model training, and sometimes impossible durin...
Full text
Fair Processes for Priority Setting: Putting Theory into Practice; Comment on “Expanded HTA: Enhancing Fairness and Legitimacy”
Embedding health technology assessment (HTA) in a fair process has great potential to capture societal values relevant to public reimbursement decisions on health technologies. However, the development of such processes for priority setting has largely been theoretical. In this paper, we offer practical guidance on how these processes can be implemented. We first present the misconce...
Full text
Provably Fair Representations
Machine learning systems are increasingly used to make decisions about people’s lives, such as whether to give someone a loan or whether to interview someone for a job. This has led to considerable interest in making such machine learning systems fair. One approach is to transform the input data used by the algorithm. This can be achieved by passing each input data point through a representatio...
Full text